Fast Adaptive Non-Monotone Submodular Maximization Subject to a Knapsack Constraint Supplementary Material
In this appendix, we include all the material missing from the main paper. Moreover, we restate a key result that connects random sampling and submodular maximization; the original version of the theorem is due to Feige et al. In what follows, we use S and O exclusively for their final versions. Before stating the next lemma, let us introduce some notation for the sake of readability.
- Europe > Netherlands > North Holland > Amsterdam (0.04)
- South America > Argentina > Pampas > Buenos Aires F.D. > Buenos Aires (0.04)
- North America > United States > Oregon > Multnomah County > Portland (0.04)
- North America > Canada (0.04)
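The connection between random sampling and submodular value can be illustrated numerically. The sketch below estimates E[f(S(p))] by Monte Carlo for a toy weighted-coverage function, where S(p) keeps each ground-set element independently with probability p; the universe, sets, and weights are invented for illustration and do not come from the paper.

```python
import random

# Toy weighted-coverage objective: f(S) = total weight of universe elements
# covered by the chosen sets. Coverage functions are non-negative and
# submodular. All data below is illustrative, not from the paper.
UNIVERSE_WEIGHTS = {"a": 3.0, "b": 2.0, "c": 2.0, "d": 1.0}
SETS = {
    1: {"a", "b"},
    2: {"b", "c"},
    3: {"c", "d"},
    4: {"a", "d"},
}

def f(chosen):
    covered = set().union(*(SETS[i] for i in chosen)) if chosen else set()
    return sum(UNIVERSE_WEIGHTS[u] for u in covered)

def expected_value_of_sample(ground, p, trials=20000, seed=0):
    """Monte Carlo estimate of E[f(S(p))], where S(p) keeps each element
    of `ground` independently with probability p."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        sample = [i for i in ground if rng.random() < p]
        total += f(sample)
    return total / trials

for p in (0.25, 0.5, 0.75):
    print(p, round(expected_value_of_sample(list(SETS), p), 2))
```

This only demonstrates the quantity the lemma concerns; the precise bound is the one restated in the appendix.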
Fast Adaptive Non-Monotone Submodular Maximization Subject to a Knapsack Constraint
Constrained submodular maximization problems encompass a wide variety of applications, including personalized recommendation, team formation, and revenue maximization via viral marketing. The massive instances occurring in modern-day applications can render existing algorithms prohibitively slow. Moreover, those instances are frequently also inherently stochastic. Focusing on these challenges, we revisit the classic problem of maximizing a (possibly non-monotone) submodular function subject to a knapsack constraint. We present a simple randomized greedy algorithm that achieves a $5.83$-approximation and runs in $O(n \log n)$ time, i.e., at least a factor $n$ faster than other state-of-the-art algorithms. The robustness of our approach allows us to further transfer it to a stochastic version of the problem. There, we obtain a 9-approximation to the best adaptive policy, which is the first constant-factor approximation for non-monotone objectives. Experimental evaluation of our algorithms showcases their improved performance on real and synthetic data.
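To make the flavor of a randomized greedy under a knapsack constraint concrete, here is a minimal sketch: repeatedly pick uniformly at random among the few feasible elements of highest marginal-value-to-cost ratio. This is a simplified illustration under invented data, not the paper's exact procedure, and it carries none of the paper's guarantees.

```python
import random

def randomized_density_greedy(elements, cost, marginal, budget, k=3, seed=0):
    """Simplified randomized greedy for a knapsack constraint.

    `marginal(e, S)` returns f(S + e) - f(S) for the submodular objective f.
    At each step, choose uniformly at random among the (at most k) feasible
    elements with the highest positive marginal-density marginal(e, S)/cost[e].
    """
    rng = random.Random(seed)
    S, spent = [], 0.0
    remaining = set(elements)
    while remaining:
        feasible = [e for e in remaining if spent + cost[e] <= budget]
        if not feasible:
            break
        feasible.sort(key=lambda e: marginal(e, S) / cost[e], reverse=True)
        # Drop elements with non-positive marginal value (relevant for
        # non-monotone objectives, where adding an element can hurt).
        top = [e for e in feasible[:k] if marginal(e, S) > 0]
        if not top:
            break
        e = rng.choice(top)
        S.append(e)
        spent += cost[e]
        remaining.discard(e)
    return S

# Toy usage with a modular (hence submodular) objective; values and costs
# are invented for illustration.
VALUES = {"x": 6.0, "y": 5.0, "z": 4.0, "w": 1.0}
COSTS = {"x": 3.0, "y": 2.0, "z": 2.0, "w": 1.0}
picked = randomized_density_greedy(VALUES, COSTS,
                                   lambda e, S: VALUES[e], budget=5.0)
```

The feasibility check before each pick ensures the budget is never exceeded, and the random choice among top candidates is what gives such methods robustness for non-monotone objectives.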